Results 1 - 9 of 9
1.
OTO Open ; 8(1): e118, 2024.
Article in English | MEDLINE | ID: mdl-38504881

ABSTRACT

Objective: To assess the quality of informational Graves' disease (GD) treatment videos on YouTube for treatment decision-making and their inclusion of American Thyroid Association (ATA) treatment guidelines. Study Design: Cross-sectional cohort. Setting: Informational YouTube videos with subject matter "Graves' Disease treatment." Method: The top 50 videos returned by our query were assessed using the DISCERN instrument, a validated tool that rates treatment-related information from excellent (≥4.5) to very poor (<1.9). Videos were also screened for inclusion of ATA guidelines. Descriptive statistics were used for cohort characterization. Univariate and multivariate linear regressions characterized factors associated with DISCERN scores. Significance was set at P < .05. Results: The videos averaged 57,513.43 views (SD = 162,579.25), 1054.70 likes (SD = 2329.77), and 168.80 comments (SD = 292.97). Most were patient education (52%) or patient experience (24%) content. A minority (40%) were made by thyroid specialists (endocrinologists, endocrine surgeons, or otolaryngologists). Under half of the videos (44%) failed to mention all 3 treatment modalities, and 54% did not mention any ATA recommendations. Overall, videos displayed poor reliability (mean = 2.26, SD = 0.67), treatment information quality (mean = 2.29, SD = 0.75), and overall video quality (mean = 2.47, SD = 1.07). Physician videos were associated with fewer likes, views, and comments (P < .001) but higher DISCERN reliability (P = .015) and overall scores (P = .019). Longer videos (P = .015), patient accounts (P = .013), and patient experience content (P = .002) were associated with lower scores. Conclusion: The most available GD treatment content on YouTube varies significantly in the quality of its medical information. This may contribute to suboptimal disease understanding, especially for patients highly engaged with online health information sources.
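The DISCERN cut-points cited in this abstract (excellent at or above 4.5, very poor below 1.9) can be sketched as a simple lookup. Only those two thresholds come from the text; the intermediate bands below are illustrative assumptions, not part of the instrument as cited.

```python
def discern_band(mean_score: float) -> str:
    """Map a mean DISCERN item score (1-5 scale) to a quality band.

    Only 'excellent' (>= 4.5) and 'very poor' (< 1.9) come from the
    abstract; the intermediate cut-points are illustrative assumptions.
    """
    if mean_score >= 4.5:
        return "excellent"
    if mean_score < 1.9:
        return "very poor"
    if mean_score >= 3.6:   # assumed cut-point
        return "good"
    if mean_score >= 2.7:   # assumed cut-point
        return "fair"
    return "poor"

# The cohort means reported above all land in the lower bands:
for label, score in [("reliability", 2.26),
                     ("treatment information", 2.29),
                     ("overall quality", 2.47)]:
    print(label, "->", discern_band(score))
```

Under these assumed bands, all three reported means fall in the "poor" range, consistent with the abstract's characterization.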

2.
OTO Open ; 8(1): e113, 2024.
Article in English | MEDLINE | ID: mdl-38299048

ABSTRACT

Objective: This study aimed to characterize the quality of laryngectomy-related patient education on YouTube and to understand factors affecting video content quality. Study Design: Cross-sectional cohort analysis. Setting: Laryngectomy-related videos on YouTube. Methods: YouTube was anonymously queried with various laryngectomy procedure search terms. Video quality was evaluated using the validated DISCERN instrument, which assesses treatment-related information quality. Descriptive statistics were used to characterize our cohort. Univariate and multivariable linear regression were used to assess factors associated with increased DISCERN score. Significance was set at P < .05. Results: Our 78-video cohort exhibited moderate levels of engagement, averaging 13,028.40 views (SD = 24,246.93), 69.79 likes (SD = 163.75), and 5.27 comments (SD = 18.81). Videos were most frequently uploaded to accounts belonging to physicians (43.59%) or health care groups (41.03%) and showcased operations (52.56%) or physician-led education (20.51%). Otolaryngologists were featured in most videos (85.90%), and most videos originated outside the United States (67.95%). Laryngectomy videos demonstrated poor reliability (mean = 2.35, SD = 0.77), quality of treatment information (mean = 1.92, SD = 0.86), and overall video quality (mean = 1.97, SD = 1.12). In multivariable linear regression, operative videos were associated with lower video quality relative to nonoperative videos (β = -1.63, 95% confidence interval [CI] = [-2.03 to -1.24], P < .001); the opposite was true for videos from accounts with higher subscriber counts (β = 0.02, 95% CI = [0.01-0.03], P = .005). Conclusion: The quality and quantity of YouTube's laryngectomy educational content are limited. There is an acute need to increase the quantity and quality of online laryngectomy-related content to better support patients and caregivers as they cope with their diagnosis and prepare for and recover from surgery.

3.
Article in English | MEDLINE | ID: mdl-38327242

ABSTRACT

INTRODUCTION: Gay and bisexual males and other LGBTQ+ communities are more frequently exposed to factors associated with an increased risk of human papillomavirus (HPV) acquisition. Vaccination is critical to protect against HPV-positive head and neck cancer (HNC). We characterized the association of perceived risk of HPV contraction with HPV knowledge and vaccine decision-making. STUDY DESIGN: Cross-sectional cohort. SETTING: LGBTQ and general survey Reddit forums (control). METHODS: A survey was shared among the online forums. Descriptive statistics characterized the data. Multivariable logistic regression was used to identify factors associated with vaccination, self-perceived high risk, and knowledge of HPV-positive HNC. RESULTS: Of 718 respondents, most were Caucasian (59.89%) and insured (77.15%); pluralities were female (41.09%) and college-educated (33.01%); mean age was 30.75 years. Half were vaccinated (49.16%), and most unvaccinated respondents endorsed interest in vaccination (60.58%). Few dependents were vaccinated (25.91%), though some parents of unvaccinated children expressed interest in vaccinating them (38.58%). Knowledge of HIV's association with HPV (62.95%), of HPV causing HNC (55.57%), and of the vaccine's efficacy against HNC (55.57%) was moderate. Identifying as female (P = .042), self-perceived high risk (P < .001), and having vaccinated children (P < .001) increased the likelihood of vaccination; identifying as transgender (P = .021) or as lesbian or gay (P < .001) decreased it. Personal HNC diagnosis (P < .001), self-vaccination (P < .001), having vaccinated children (P < .001), having anal sex (P = .001), or no knowledge of past HPV status (P < .001) increased the likelihood of high self-perceived risk. CONCLUSION: Efforts to improve public education regarding the association between HPV and HNC and vaccination efficacy are required to better inform vaccine decision-making among individuals at risk for HPV infection.
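The multivariable logistic regression step described in the Methods can be sketched in plain NumPy as gradient ascent on the log-likelihood. This is an illustrative sketch, not the authors' code; the data below are invented and stand in for a single binary predictor such as self-perceived high risk.

```python
import numpy as np

def logit_fit(X, y, iters=2000, lr=0.5):
    """Fit a logistic regression by gradient ascent (illustrative sketch)."""
    X = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))     # predicted probabilities
        beta += lr * X.T @ (y - p) / len(y)     # average-gradient step
    return beta

# Invented binary predictor (e.g., self-perceived high risk):
rng = np.random.default_rng(1)
high_risk = rng.integers(0, 2, 800).astype(float)
# Outcome (vaccinated) generated so that high risk raises the odds:
p_true = 1.0 / (1.0 + np.exp(-(-0.5 + 1.2 * high_risk)))
vaccinated = (rng.random(800) < p_true).astype(float)

beta = logit_fit(high_risk.reshape(-1, 1), vaccinated)
print(beta)  # beta[1] should be positive, reflecting the increased odds
```

A positive fitted coefficient on the predictor corresponds to an increased likelihood of the outcome, which is how the abstract's "increased/decreased likelihood" findings would be read off a model like this.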

4.
Otolaryngol Head Neck Surg ; 170(3): 776-787, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37811692

ABSTRACT

OBJECTIVE: To investigate the prevalence of hearing protection (HP) use and the behavioral motivations and barriers among adults attending music venues. STUDY DESIGN: Cross-sectional online survey study. SETTING: Popular social music venues, where noise exposure levels often exceed national guidelines. METHODS: Surveys were distributed on online music communities. Respondents (n = 2352) were asked about demographics, HP use at music venues, knowledge of the impact of noise exposure, and perceptions of HP use. Data were characterized through descriptive statistics. Multivariable regression analysis explored differences in knowledge and perception between HP users and nonusers. RESULTS: In this cohort (mean age 29 ± 7 years, 61% male), HP users were significantly more likely than nonusers to be aware of the impact of music venues on hearing (P < .01), to believe their hearing ability had decreased after attending music venues (P < .01), and to believe HP could protect against hearing loss (P < .01). HP nonusers most frequently cited never having considered HP (14.45%) and concern that it would affect music quality (12.71%). The most common source of HP information was recommendation by a friend or peer. Multivariable regression analysis accounting for demographics, medical history, and attendance characteristics found that belief that HP use at music venues could protect against hearing loss (β = 0.64, 95% confidence interval [CI] = [0.49-0.78]) and HP use itself (β = 1.73, 95% CI = [1.47-1.98]) were significantly associated with increased subjective enjoyment while wearing HP. CONCLUSION: HP users and nonusers have significantly different perceptions of HP use and its impact. Our findings have implications for understanding motivations and barriers related to HP use and for developing strategies to promote HP use at music venues.


Subjects
Deafness; Hearing Loss, Noise-Induced; Music; Adult; Humans; Male; Young Adult; Female; Hearing Loss, Noise-Induced/prevention & control; Hearing Loss, Noise-Induced/epidemiology; Cross-Sectional Studies; Hearing Tests; Hearing
5.
Laryngoscope Investig Otolaryngol ; 8(6): 1685-1691, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38130243

ABSTRACT

Objective: To evaluate the quality of thyroidectomy-related posts on TikTok, the fastest-growing social media platform worldwide. Methods: Videos posted from April 2020 to September 2022 were queried on TikTok using the search terms "thyroidsurgery," "thyroidectomy," and "thyroidremoval." Two reviewers recorded thematic, demographic, and performance data for these posts. The DISCERN instrument was used to evaluate the quality and reliability of the information contained in the videos. Descriptive statistics were used to characterize post-submitter demographics and video content. Simple and multiple linear regression analyses were used to evaluate the association between DISCERN scores and video characteristics. Univariate analysis of variance was performed to compare DISCERN scores between author types. Results: In this study, 228 TikTok videos were included, which totaled over 23 million views. On average, each video accumulated more than 6000 "likes," 300 comments, and 70 shares. The average total DISCERN score was 27.46, which is deemed poor overall quality. Upon multiple linear regression, video duration (β = 4.66, p < .001) and educational subject type (β = 3.97, p < .001) significantly positively predicted aggregate DISCERN scores, while journey subject type (β = -3.19, p = .006) and reassurance subject type (β = -2.52, p = .035) significantly negatively predicted aggregate DISCERN scores. Aggregate DISCERN scores also varied significantly (p < .05) between author types. Conclusion: Social media posts on TikTok about thyroidectomy are mostly of poor quality and reliability but vary by authorship, subject type, and video characteristics. Given its widespread popularity, TikTok may have an increasing role in shaping patient perceptions of thyroidectomy and may represent an opportunity to provide education. Lay summary: TikTok posts about thyroidectomy are mostly of poor quality but vary by authorship, subject, and video characteristics. Given its popularity, TikTok videos may have a role in shaping patient perceptions of thyroidectomy and may represent an opportunity to provide education. Level of evidence: Level 4.
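The multiple linear regression step described in the Methods above can be sketched as an ordinary least-squares fit. This is a hedged illustration, not the authors' code: the predictors and response below are invented, loosely shaped to match the reported directions of effect (duration and educational content positive, journey content negative).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 228  # cohort size reported in the abstract

# Invented predictors standing in for the video characteristics:
duration_min = rng.uniform(0.2, 10.0, n)                  # video duration (minutes)
educational = rng.integers(0, 2, n).astype(float)         # educational subject type
journey = (1.0 - educational) * rng.integers(0, 2, n)     # journey subject type

# Invented aggregate DISCERN response with the reported effect directions:
discern = (20.0 + 1.0 * duration_min + 4.0 * educational
           - 3.0 * journey + rng.normal(0.0, 3.0, n))

# Ordinary least squares: columns are intercept, duration, educational, journey
X = np.column_stack([np.ones(n), duration_min, educational, journey])
beta, *_ = np.linalg.lstsq(X, discern, rcond=None)
print(beta)  # signs: +duration, +educational, -journey
```

The fitted coefficients recover the signs built into the synthetic data, mirroring how the abstract reports positive and negative predictors of the aggregate DISCERN score.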

6.
Laryngoscope ; 2023 Nov 20.
Article in English | MEDLINE | ID: mdl-37983846

ABSTRACT

OBJECTIVE: With the burgeoning popularity of artificial intelligence-based chatbots, oropharyngeal cancer patients now have access to a novel source of medical information. Because chatbot information is not reviewed by experts, we sought to evaluate the accuracy of an artificial intelligence-based chatbot's oropharyngeal cancer-related information. METHODS: Fifteen oropharyngeal cancer-related questions were developed and input into ChatGPT version 3.5. Four physician-graders independently assessed accuracy, comprehensiveness, and similarity to a physician response using 5-point Likert scales. Responses graded lower than 3 were then critiqued by the physician-graders. Critiques were analyzed using inductive thematic analysis. Readability of responses was assessed using the Flesch Reading Ease (FRE) and Flesch-Kincaid Reading Grade Level (FKRGL) scales. RESULTS: Average accuracy, comprehensiveness, and similarity-to-a-physician-response scores were 3.88 (SD = 0.99), 3.80 (SD = 1.14), and 3.67 (SD = 1.08), respectively. Posttreatment-related questions were the most accurate, comprehensive, and similar to a physician response, followed by treatment-related and then diagnosis-related questions. Posttreatment-related questions scored significantly higher than diagnosis-related questions in all three domains (p < 0.01). Two themes emerged from the physician critiques: suboptimal educational value and potential to misinform patients. The mean FRE and FKRGL scores both indicated greater than an 11th-grade readability level, higher than the 6th-grade level recommended for patient materials. CONCLUSION: ChatGPT responses may not educate patients to an appropriate degree, could outright misinform them, and read at a more difficult grade level than is recommended for patient material. As oropharyngeal cancer patients represent a vulnerable population facing complex, life-altering diagnoses and treatments, they should be cautious when consuming chatbot-generated medical information.
LEVEL OF EVIDENCE: N/A. Laryngoscope, 2023.
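The two readability scales used in this study follow published formulas. A minimal sketch, using a naive vowel-group syllable counter (an approximation, so scores will differ slightly from dedicated tools):

```python
import re

def _syllables(word: str) -> int:
    # Crude approximation: count vowel groups, minimum one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(_syllables(w) for w in words)
    wps = len(words) / sentences          # words per sentence
    spw = syllables / len(words)          # syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fkgl
```

Higher FRE means easier text, while FKGL approximates a US school grade; a grade level above 11, as reported for the ChatGPT responses, is well above the 6th-grade target for patient material.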

8.
Urol Pract ; 10(5): 436-443, 2023 09.
Article in English | MEDLINE | ID: mdl-37410015

ABSTRACT

INTRODUCTION: This study assessed ChatGPT's ability to generate readable, accurate, and clear layperson summaries of urological studies and compared ChatGPT-generated summaries with original abstracts and author-written patient summaries to determine its effectiveness as a potential tool for creating accessible medical literature for the public. METHODS: Articles from the top 5 ranked urology journals were selected. A ChatGPT prompt was developed following guidelines to maximize readability, accuracy, and clarity while minimizing variability. Readability scores and grade-level indicators were calculated for the ChatGPT summaries, original abstracts, and patient summaries. Two physicians independently rated the accuracy and clarity of the ChatGPT-generated layperson summaries. Statistical analyses were conducted to compare readability scores. Cohen's κ coefficient was used to assess interrater reliability for the correctness and clarity evaluations. RESULTS: A total of 256 journal articles were included. The ChatGPT-generated summaries were created in an average time of 17.5 (SD 15.0) seconds. The readability scores of the ChatGPT-generated summaries were significantly better than those of the original abstracts: Global Readability Score 54.8 (12.3) vs 29.8 (18.5), Flesch-Kincaid Reading Ease 54.8 (12.3) vs 29.8 (18.5), Flesch-Kincaid Grade Level 10.4 (2.2) vs 13.5 (4.0), Gunning Fog Score 12.9 (2.6) vs 16.6 (4.1), SMOG Index 9.1 (2.0) vs 12.0 (3.0), Coleman-Liau Index 12.9 (2.1) vs 14.9 (3.7), and Automated Readability Index 11.1 (2.5) vs 12.0 (5.7); P < .0001 for all except the Automated Readability Index (P = .037). The correctness rate of ChatGPT outputs was >85% across all categories assessed, with interrater agreement (Cohen's κ) between the 2 independent physician reviewers ranging from 0.76 to 0.95. CONCLUSIONS: ChatGPT can create accurate summaries of scientific abstracts for patients, with well-crafted prompts enhancing user-friendliness. Although the summaries are satisfactory, expert verification is necessary for improved accuracy.
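Cohen's κ, used in this study for interrater reliability, compares the observed agreement between two raters to the agreement expected by chance; a minimal two-rater sketch:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Proportion of items on which the raters agree:
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies:
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum((ca[label] / n) * (cb[label] / n)
                   for label in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)
```

By common benchmarks, the reported range of 0.76 to 0.95 corresponds to substantial to almost-perfect agreement.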


Subjects
Health Literacy; Urology; Humans; Reproducibility of Results; Comprehension; Language
9.
J Urol ; 210(4): 688-694, 2023 10.
Article in English | MEDLINE | ID: mdl-37428117

ABSTRACT

PURPOSE: The Internet is a ubiquitous source of medical information, and natural language processors are gaining popularity as alternatives to traditional search engines. However, the suitability of their generated content for patients is not well understood. We aimed to evaluate the appropriateness and readability of natural language processor-generated responses to urology-related medical inquiries. MATERIALS AND METHODS: Eighteen patient questions were developed based on Google Trends and used as inputs to ChatGPT. Three categories were assessed: oncologic, benign, and emergency. Questions in each category were either treatment-related or sign/symptom-related. Three native English-speaking, board-certified urologists independently assessed the appropriateness of ChatGPT outputs for patient counseling, using accuracy, comprehensiveness, and clarity as proxies for appropriateness. Readability was assessed using the Flesch Reading Ease and Flesch-Kincaid Reading Grade Level formulas. Additional measures were created based on validated tools and assessed by 3 independent reviewers. RESULTS: Fourteen of 18 (77.8%) responses were deemed appropriate, with clarity receiving the most scores of 4 and 5 (P = .01). There was no significant difference in the appropriateness of responses between treatment and sign/symptom questions or between categories of conditions. The most common reason urologists gave for low scores was that responses lacked information, sometimes vital information. The mean (SD) Flesch Reading Ease score was 35.5 (10.2), and the mean Flesch-Kincaid Reading Grade Level score was 13.5 (1.74). Additional quality assessment scores showed no significant differences between categories of conditions. CONCLUSIONS: Despite impressive capabilities, natural language processors have limitations as sources of medical information. Refinement is crucial before adoption for this purpose.


Subjects
Health Literacy; Urology; Humans; Artificial Intelligence; Comprehension; Language; Internet